posted 11-26-2006 06:16 PM
The problem is that our consumers don't always know what to properly expect from us, and they have at times been misinformed. In Colorado, it took a committee two years, until 2001/02, to get examiners to stop making split calls (and I still see it) and to stop mixing sex-history and maintenance polygraphs (and I still see that too, though that examiner is currently incarcerated on another matter).
ASTM doesn't refer to PCSOT directly, but regarding screening exams it directs that examiners should not render a deceptive opinion based solely on the results of a screening test. It seems a lot of folks are still figuring out that PCSOT tests are in fact screening tests. POs and other folks don't like that - they like simple answers.
It seems that stat has had a different experience, with a therapist who likes the polygraph as a form of equalizer or cudgel but isn't so much interested in the results themselves (a reflection of some lack of confidence in the science of polygraph). Clearly there is a need to continue articulating an accurate understanding of the polygraph in both policies and contracts.
Our solution to this is to create change through local standards, which indicate that PCSOT test results should be reported in terms of SR/NSR. The presence of reactions is unequivocal - the meaning of those reactions is for the offender to explain to us. Of course, calling reactions "significant" instead of "deceptive" is just code, and that raises the question...
Significant by what standard?
And that brings us to the problem of science and measurement. OSS/OSS-2 are good empirical methods for single-issue, three-question ZCT formats, but we presently lack an empirically defensible account of how we attribute "significance" to our test scores.
For this very reason, I have been developing a tool-kit of statistical significance tests that would be easily recognized by any university-level or advanced statistician or researcher - including parametric t-tests, nonparametric tests, and normality tests. CPS didn't do this; it used likelihood ratios and Bayesian inference instead. Axciton does who-knows-what. PolyScore is a black box that is not very informative. I'm somewhat familiar with reading the output of statistical software such as SPSS, SAS, and Minitab, but the output of PolyScore is meaningless to me - it seems like a bunch of gee-whiz numbers (in fairness, I haven't used recent versions). Identifi doesn't provide much information at all, but it does seem to do a good job of mimicking handscoring procedures.
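For illustration, here's a minimal sketch (plain Python, standard library only) of the kind of tests such a tool-kit might include: an exact nonparametric sign test and a one-sample t statistic. The function names and the idea of feeding in per-question reaction scores are my own assumptions for the example, not anyone's published scoring algorithm.

```python
from math import comb
from statistics import mean, stdev

def sign_test_p(scores, mu0=0.0):
    """Exact two-sided sign test: are scores systematically above/below mu0?
    Returns a p-value computed from the binomial distribution under
    H0: P(score > mu0) = 0.5. Ties with mu0 are dropped (standard practice)."""
    pos = sum(1 for s in scores if s > mu0)
    neg = sum(1 for s in scores if s < mu0)
    n = pos + neg
    if n == 0:
        return 1.0
    k = min(pos, neg)
    # Two-sided exact binomial tail probability
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

def one_sample_t(scores, mu0=0.0):
    """t statistic for H0: mean(scores) == mu0.
    Compare against a t table with df = n - 1 (parametric; assumes
    approximately normal scores, which the normality tests would check)."""
    n = len(scores)
    return (mean(scores) - mu0) / (stdev(scores) / n ** 0.5)

# Ten consistently elevated scores: the sign test flags them as significant
print(sign_test_p([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))  # well below 0.05
# Scores balanced around zero: nothing significant
print(sign_test_p([-1, 1]))  # 1.0
```

The point of pairing the two is exactly the parametric/nonparametric distinction mentioned above: the t test buys power when scores are roughly normal, while the sign test makes no distributional assumption at all - which is why a defensible tool-kit needs the normality tests alongside them.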
So anyway back to the point.
If its not "deceptive," then what is it?
If we want to call it "significant," then we had better be prepared to eventually justify that. In the realm of science, the term "significant" carries certain expectations, just as the term "control" does.
Say what you want about rabid fascination with algorithms, but the solutions to these problems most likely lie in policies and standards of practice that are based on science and mathematics, not in some arcane shell game in which we remain unable to convincingly articulate the empirical principles of our profession.
r
------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)